53 research outputs found
An Energy-based Approach to Ensure the Stability of Learned Dynamical Systems
Non-linear dynamical systems represent a compact, flexible, and robust tool
for reactive motion generation. The effectiveness of dynamical systems relies
on their ability to accurately represent stable motions. Several approaches
have been proposed to learn stable and accurate motions from demonstration.
Some approaches work by separating accuracy and stability into two learning
problems, which increases the number of open parameters and the overall
training time. Alternative solutions exploit single-step learning but restrict
the applicability to one regression technique. This paper presents a
single-step approach, compatible with any regression technique, to learn
stable and accurate motions. The approach exploits energy considerations on
the learned dynamics to stabilize the system at run-time while introducing small deviations
from the demonstrated motion. Since the initial value of the energy injected
into the system affects the reproduction accuracy, it is estimated from
training data using an efficient procedure. Experiments on a real robot and a
comparison on a public benchmark show the effectiveness of the proposed
approach.
Comment: Accepted at the International Conference on Robotics and Automation
202
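The run-time stabilization idea described above can be illustrated with a minimal energy-budget sketch. This is not the paper's actual formulation; the candidate Lyapunov function, the tank update rule, and all names are assumptions chosen for illustration.

```python
import numpy as np

def stabilized_step(x, f_learned, tank, dt=0.01):
    """One integration step of a learned dynamical system with a simple
    energy-budget safeguard (illustrative sketch, not the paper's method).

    V(x) = 0.5 * ||x||^2 serves as a candidate Lyapunov function around the
    target x* = 0. The learned field may inject energy (Vdot > 0), which is
    only allowed while the stored budget ("tank") can pay for it."""
    v = f_learned(x)
    vdot = float(x @ v)          # power injected into V along v
    if vdot > 0.0:               # destabilizing direction
        budget = tank / dt       # maximum power the tank can supply
        if vdot > budget:
            # remove the excess destabilizing component along x
            v = v - ((vdot - budget) / (x @ x)) * x
            vdot = budget
        tank -= vdot * dt        # pay for the injected energy
    else:
        tank += -vdot * dt       # dissipated energy refills the tank
    return x + dt * v, tank
```

With an empty tank the step cannot increase V, so the system trajectory stays bounded even when the learned field is locally unstable; a larger initial tank value permits larger (more accurate) deviations, matching the abstract's point that the initial energy affects reproduction accuracy.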
Antifragile Control Systems: The case of mobile robot trajectory tracking in the presence of uncertainty
Mobile robots are ubiquitous. Such vehicles benefit from well-designed and
calibrated control algorithms ensuring their task execution under precise
uncertainty bounds. Yet, in tasks involving humans in the loop, such as
assisting elderly or mobility-impaired users, the problem takes on a new
dimension. In such cases, the system must not only compensate for uncertainty
and volatility in its operation but also anticipate and offer responses that
go beyond robustness. Such robots operate in cluttered, complex environments
akin to human residences, must withstand sensor and even actuator faults
during operation, and still have to function. This is where our thesis comes
to the fore.
We propose a new control design framework based on the principles of
antifragility. Such a design is meant to offer a high uncertainty anticipation
given previous exposure to failures and faults, and exploit this anticipation
capacity to provide performance beyond robust. In the current instantiation of
antifragile control applied to mobile robot trajectory tracking, we provide
controller design steps, the analysis of performance under parametrizable
uncertainty and faults, as well as an extended comparative evaluation against
state-of-the-art controllers. We believe antifragile control has the
potential to achieve closed-loop performance in the face of uncertainty and
volatility by using its exposure to uncertainty to increase its capacity to
anticipate and compensate for such events.
Learning Barrier Functions for Constrained Motion Planning with Dynamical Systems
Stable dynamical systems are a flexible tool to plan robotic motions in
real-time. In the robotic literature, dynamical system motions are typically
planned without considering possible limitations in the robot's workspace. This
work presents a novel approach to learn workspace constraints from human
demonstrations and to generate motion trajectories for the robot that lie in
the constrained workspace. Training data are incrementally clustered into
different linear subspaces and used to fit a low dimensional representation of
each subspace. By considering the learned constraint subspaces as zeroing
barrier functions, we are able to design a control input that keeps the system
trajectory within the learned bounds. This control input is effectively
combined with the original system dynamics, preserving any asymptotic
properties of the unconstrained system. Simulations and experiments on a real
robot show the effectiveness of the proposed approach.
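The zeroing-barrier mechanism mentioned above can be sketched with a pointwise velocity filter. The paper learns the barrier from clustered demonstrations; here the barrier function and its gradient are simply given, and all names are illustrative assumptions.

```python
import numpy as np

def cbf_filter(x, v_nominal, h, grad_h, alpha=5.0):
    """Minimally modify a nominal velocity so that the zeroing-barrier
    condition  dh/dt >= -alpha * h(x)  holds (illustrative sketch; the
    paper learns h(x) from demonstrations, here it is supplied directly).

    This is the closed-form solution of the pointwise quadratic program
    with a single linear constraint: the smallest correction lies along
    the barrier gradient."""
    g = grad_h(x)
    slack = g @ v_nominal + alpha * h(x)   # constraint residual
    if slack >= 0.0:
        return v_nominal                   # nominal velocity already safe
    # project out just enough velocity to restore feasibility
    return v_nominal - (slack / (g @ g)) * g
```

Because the filter only acts when the constraint is about to be violated, the original dynamics (and hence their convergence behavior) are untouched in the interior of the safe set, mirroring the abstract's claim that asymptotic properties are preserved.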
Merging Position and Orientation Motion Primitives
In this paper, we focus on generating complex robotic trajectories by merging
sequential motion primitives. A robotic trajectory is a time series of
positions and orientations ending at a desired target. Hence, we first discuss
the generation of converging pose trajectories via dynamical systems, providing
a rigorous stability analysis. Then, we present approaches to merge motion
primitives which represent both the position and the orientation part of the
motion. The developed approaches preserve the shape of each learned movement
and allow for continuous transitions among succeeding motion primitives. The
presented methodologies are theoretically described and experimentally
evaluated, showing that it is possible to generate a smooth pose trajectory
out of multiple motion primitives.
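The idea of a continuous transition between succeeding primitives can be sketched with sigmoid-weighted averaging over an overlap window. This is an illustrative simplification: the paper handles full pose trajectories (positions and orientations) with a stability analysis, while the sketch below blends positions only, and the window length is an assumed parameter.

```python
import numpy as np

def blend(traj_a, traj_b, overlap=50):
    """Continuously merge two sequential position trajectories by
    sigmoid-weighted averaging over an overlap window (illustrative
    sketch; orientations, handled in the paper, are omitted here)."""
    w = 1.0 / (1.0 + np.exp(-np.linspace(-6, 6, overlap)))   # weight 0 -> 1
    head = traj_a[:-overlap]                                  # pure first primitive
    mix = (1 - w)[:, None] * traj_a[-overlap:] + w[:, None] * traj_b[:overlap]
    tail = traj_b[overlap:]                                   # pure second primitive
    return np.vstack([head, mix, tail])
```

Outside the overlap window each primitive is reproduced exactly, so the shape of each learned movement is preserved while the merged trajectory remains smooth at the junction.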
Learning Deep Robotic Skills on Riemannian manifolds
In this paper, we propose RiemannianFlow, a deep generative model that allows
robots to learn complex and stable skills evolving on Riemannian manifolds.
Examples of Riemannian data in robotics include stiffness (symmetric
positive definite (SPD) matrices) and orientation (unit quaternion (UQ))
trajectories.
trajectories. For Riemannian data, unlike Euclidean ones, different dimensions
are interconnected by geometric constraints which have to be properly
considered during the learning process. Using distance-preserving mappings,
our approach transfers the data between their original manifold and the
tangent space, removing and then restoring the geometric constraints. This
makes it possible to extend existing frameworks to learn stable skills from
Riemannian data while guaranteeing the stability of the learning results. The
ability of RiemannianFlow to learn various data patterns and the stability of
the learned models are experimentally shown on a dataset of manifold motions.
Further, we analyze from different perspectives the robustness of the model
with different hyperparameter combinations. It turns out that the model's
stability is not affected by the hyperparameters, while a proper combination
of the hyperparameters leads to a significant improvement (up to 27.6%) of
the model accuracy. Lastly, we show the effectiveness of RiemannianFlow in a real
peg-in-hole (PiH) task where we need to generate stable and consistent position
and orientation trajectories for the robot starting from different initial
poses.
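The manifold-to-tangent-space transfer mentioned in the abstract can be illustrated for unit quaternions with the standard logarithmic and exponential maps at the identity. This is a generic textbook construction, not RiemannianFlow's specific mapping; the quaternion convention (scalar-first) is an assumption.

```python
import numpy as np

def quat_log(q):
    """Logarithmic map at the identity: unit quaternion q = (w, x, y, z)
    with ||q|| = 1 mapped to a vector in the R^3 tangent space
    (illustrative of transferring UQ data off the manifold)."""
    w, v = q[0], q[1:]
    n = np.linalg.norm(v)
    if n < 1e-12:
        return np.zeros(3)
    return 2.0 * np.arctan2(n, w) * v / n

def quat_exp(r):
    """Exponential map: R^3 tangent vector back to a unit quaternion
    (restoring the unit-norm geometric constraint)."""
    theta = np.linalg.norm(r)
    if theta < 1e-12:
        return np.array([1.0, 0.0, 0.0, 0.0])
    axis = r / theta
    half = 0.5 * theta
    return np.concatenate([[np.cos(half)], np.sin(half) * axis])
```

Learning can then operate on the unconstrained tangent vectors with Euclidean tools, and mapping back through the exponential guarantees the result satisfies the unit-norm constraint by construction.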
Learning Stable Robotic Skills on Riemannian Manifolds
In this paper, we propose an approach to learn stable dynamical systems
evolving on Riemannian manifolds. The approach leverages a data-efficient
procedure to learn a diffeomorphic transformation that maps simple stable
dynamical systems onto complex robotic skills. By exploiting mathematical tools
from differential geometry, the method ensures that the learned skills fulfill
the geometric constraints imposed by the underlying manifolds, such as unit
quaternion (UQ) for orientation and symmetric positive definite (SPD) matrices
for impedance, while preserving the convergence to a given target. The proposed
approach is first tested in simulation on a public benchmark, obtained by
projecting Cartesian data onto the UQ and SPD manifolds, and compared with
existing approaches. Beyond this benchmark evaluation, several experiments
were performed on a real robot: bottle stacking under different conditions
and a drilling task in cooperation with a human operator. The evaluation
shows promising results in terms of learning accuracy and task adaptation
capabilities.
Comment: 16 pages, 10 figures, journal
Imitation Learning-based Visual Servoing for Tracking Moving Objects
In everyday collaborative tasks between human operators and robots, the
former need simple ways to program new skills, while the latter have to show
adaptive capabilities to cope with environmental changes. The joint use of
visual servoing and imitation learning allows us to pursue the objective of
realizing friendly robotic interfaces that (i) are able to adapt to the
environment thanks to the use of visual perception and (ii) avoid explicit
programming thanks to the emulation of previous demonstrations. This work aims
to exploit imitation learning for the visual servoing paradigm to address the
specific problem of tracking moving objects. In particular, we show that it is
possible to infer from data the compensation term required for realizing the
tracking controller, avoiding the explicit implementation of estimators or
observers. The effectiveness of the proposed method has been validated through
simulations with a robotic manipulator.
Comment: International Workshop on Human-Friendly Robotics (HFR), 202
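The compensation term the abstract refers to can be sketched in the classical image-based visual servoing setting. Below, the target's apparent image motion is estimated by finite differences purely for illustration; the paper's point is that this term can instead be inferred from demonstration data, and the interaction-matrix pseudo-inverse and all names are assumptions.

```python
import numpy as np

def tracking_control(error, prev_error, L_pinv, dt, lam=1.0):
    """Image-based visual servoing of a moving target (illustrative sketch).

    v = -lam * L^+ e drives the feature error to zero for a static target;
    the extra term compensates the target's own image motion, estimated
    here by finite differences (the paper infers this compensation from
    data rather than implementing an explicit estimator)."""
    de = (error - prev_error) / dt          # apparent target motion in the image
    return -lam * (L_pinv @ error) - L_pinv @ de
```

Without the second term, a proportional-only controller lags behind a moving target with a steady-state error proportional to the target velocity; the compensation term removes that lag.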